Uncertainty measures for evidential reasoning I: A review

Authors

  • Nikhil R. Pal
  • James C. Bezdek
  • Rohan Hemasinha
Abstract

This paper is divided into two parts. Part I discusses limitations of the measures of global uncertainty of Lamata and Moral and total uncertainty of Klir and Ramer. We prove several properties of different nonspecificity measures. The computational complexity of different total uncertainty measures is discussed. The need for a new measure of total uncertainty is established in Part I. In Part II, we propose a set of intuitively desirable axioms for a measure of total uncertainty and then derive an expression for the same. Several theorems are proved about the new measure. The proposed measure is additive and, unlike other measures, has a unique maximum. This new measure reduces to Shannon's probabilistic entropy when the basic probability assignment focuses only on singletons. On the other hand, complete ignorance (basic assignment focusing only on the entire set, as a whole) reduces it to Hartley's measure of information. The computational complexity of the proposed measure is O(N), whereas the previous measures are O(N^2).

KEYWORDS: conflict, confusion, evidential reasoning, entropy, dissonance, specificity, uncertainty

Address correspondence to James C. Bezdek, Division of Computer Science, University of West Florida, Pensacola, FL 32514. N. R. Pal is on leave from ISI, Calcutta, India. Received December 1, 1991; accepted May 1, 1992. This research was supported by NSF IRI-9003252. International Journal of Approximate Reasoning 1992; 7:165-183. © 1992 Elsevier Science Publishing Co., Inc.

1. INTRODUCTION

Consider a simple experiment with a six-faced die. Suppose the die is thoroughly shaken and placed on a table covered with a box, and you are asked to guess the top face of the die.
To answer this question one faces a type of uncertainty that can be attributed to randomness present in the system (experiment). The best answer might be to describe the status of the die in terms of a probability distribution over the different faces (if known). Uncertainty that arises due to randomness in the system is called probabilistic uncertainty. To make the system more complex, suppose an artificial vision system analyzes a digital image of the die and, based on the evidence gathered, suggests that the top face is 5 or 6 with a belief value (confidence) of, say, 0.8. In other words, based on the evidence the system is not able to specify the top face of the die exactly. This kind of uncertainty usually arises due to limitations of the evidence-gathering and interpretation system. Uncertainty in this second situation is due to a difficulty in specifying the exact solution and is called nonspecificity in the literature. Finally, suppose you are asked to interpret the topmost face of the die as high or low. In this case a different type of uncertainty (ambiguity) is faced. Here the existence of a fuzzy event high is assumed, and one has to gauge the extent to which this event has occurred. In a probabilistic experiment an event either occurs or does not, but here an event may occur partially. This type of uncertainty arises due to the presence of fuzziness in the system. It is clear that fuzzy uncertainty differs from probabilistic uncertainty and nonspecificity. Fuzzy uncertainty deals with situations where the boundaries of the sets under consideration are not sharply defined (partial occurrence of an event). For probabilistic and nonspecific uncertainties, on the other hand, there is no ambiguity about set boundaries, but rather about the belongingness of elements or events to crisp sets. The present study confines itself to non-fuzzy uncertainties. The literature is quite rich on fuzzy uncertainty; interested readers may refer to [1-4].
Intuitively one feels that uncertainty due to nonspecificity is related to probabilistic uncertainty. It is very difficult to look at either of them in isolation. For example, when belief values are assigned only to singleton elements, there is no uncertainty due to nonspecificity; only randomness may be present. But if a belief value is assigned to a single set with cardinality greater than one, then there is only nonspecificity. On the other hand, when belief values are assigned to more than one set, not all of which are singletons, one faces both randomness and nonspecificity. In our discussion we restrict ourselves to a finite reference set X and an unknown element x belonging to X. It is also assumed that all information about the belongingness of x to X, if available, is expressible in the language of the Dempster-Shafer [5] theory of evidence. Several authors have suggested different measures for uncertainty. Yager [6] proposed a measure called dissonance or conflict, while Hohle [7, 8] suggested a measure to quantify the level of confusion present in a body of evidence. Smets [9] suggested a different type of measure for the information content of a body of evidence. Unlike the measures of Yager [6] and Hohle [7, 8], the measure of Smets is not a generalization of Shannon's entropy. Higashi and Klir [10] proposed a measure of nonspecificity for a possibility distribution that was later extended to any body of evidence by Dubois and Prade [11]. Recently Klir and Ramer [12] pointed out some limitations of the measure of conflict (confusion) of Hohle [7, 8] and suggested a new measure for the same. We give an example which shows that this measure also leads to an unappealing situation, and we prove several theorems on different nonspecificity measures. Lamata and Moral [13] proposed two composite measures, called global uncertainty measures, that attempt to quantify both the probabilistic and nonspecific aspects of uncertainty.
One of their measures simply adds the dissonance measure of Yager to the nonspecificity measure of Dubois and Prade; whereas the other one is introduced via definition without prior motivation or justification. Klir and Ramer subsequently suggested another composite measure called total uncertainty that is defined as the sum of Dubois and Prade's nonspecificity and a new measure of conflict called discord [12]. Composite measures reflect some interesting aspects of uncertainty, but analysis reveals that they can lead to intuitively unappealing situations when interpreted as total uncertainty. For example, all of these composite measures have several maxima, which makes it difficult to gauge the quality of evidence based on their numerical values. Moreover, the computational overhead for each of these measures is high. Finally, and most importantly, elementary measures of nonspecificity or probabilistic uncertainty such as dissonance, discord, or nonspecificity attempt to measure only one of the two aspects of non-fuzzy uncertainty, so interpretation of a composite measure such as the total or global uncertainty is difficult. Aggregation of elementary measures may depend on the mode of interaction between different aspects of uncertainty represented by the components in the sum. Because nonspecificity and randomness are related in an unknown way, it does not seem desirable to add expressions for these measures directly to get a measure for total uncertainty. In Part I of this paper we evaluate existing measures of uncertainty and prove several new properties about some of them. We show that existing measures cannot model situations when no evidence is better than inconsistent evidence. 
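The two elementary measures that these composite sums combine can be sketched concretely. The formulas below (Dubois and Prade's nonspecificity, N(m) = Σ m(A) log₂|A|, and Yager's dissonance, E(m) = −Σ m(A) log₂ Pl(A)) are taken from the cited literature rather than from this excerpt, and the body of evidence is a hypothetical one based on the die example; treat this as an illustrative sketch only:

```python
from math import log2

def nonspecificity(m):
    """Dubois-Prade nonspecificity: N(m) = sum over focal A of m(A) * log2|A|."""
    return sum(v * log2(len(A)) for A, v in m.items())

def plausibility(m, A):
    """Pl(A): total mass on focal sets that intersect A."""
    return sum(v for B, v in m.items() if A & B)

def dissonance(m):
    """Yager's dissonance: E(m) = -sum over focal A of m(A) * log2 Pl(A)."""
    return -sum(v * log2(plausibility(m, A)) for A, v in m.items())

# Hypothetical body of evidence for the die example: "5 or 6" with mass 0.8,
# the remaining mass on the whole universe X = {1, ..., 6}.
X = frozenset(range(1, 7))
m = {frozenset({5, 6}): 0.8, X: 0.2}
print(nonspecificity(m))  # 0.8*log2(2) + 0.2*log2(6)
print(dissonance(m))      # the focal sets are nested, so Pl = 1 on each and E(m) = 0
```

Note that a composite "global uncertainty" of the first Lamata-Moral kind would simply be the sum of these two numbers, which is exactly the kind of direct addition the text questions.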
To elucidate this point, consider the following two situations. First, an expert is completely ignorant about the unknown element (because of insufficient or no evidence), and the confidence (belief) assigned to the universal set X is 1. Second, based on the evidence, an expert assigns a belief value of 1/(2^n − 1) to each of the possible nonempty subsets of X. In the former case the expert is confident about his or her ignorance; in the latter case the expert seems confused. Because the basic assignment in the second case is expected to be inconsistent, the total uncertainty in the second case should be larger than in the first case. However, none of the existing measures support this position. All of the problems itemized above motivate us to look for a new measure of average total uncertainty. Our approach will be to postulate a set of axioms that seem desirable for any measure of total uncertainty (as opposed to axioms related to only one component in a composite sum) and then to derive a function that satisfies the axioms. The measure of total uncertainty we discover will account for the expert's dilemma given above. Several theoretical properties of the new measure are studied, and a numerical example is given that affords an empirical comparison with previous measures. Unlike other measures of total uncertainty, the new measure has a unique maximum. Shannon's probabilistic entropy and Hartley's entropy are shown to be special cases of our measure. Finally, we show that the new measure is computationally more tractable than previous measures.

2. BASIC TERMINOLOGY

In this section we introduce the basic terminology and definitions of the Dempster-Shafer theory of evidence. Let X be a finite universe of discourse, |X| = n, P(X) the power set of X, and x any element in X.
All information about the belongingness of x to X is expressible by a basic probability assignment (BPA) function m: P(X) → [0, 1] that satisfies:

m(∅) = 0 (∅ = empty set)    (1)

and

Σ_{A⊆X} m(A) = 1.    (2)

The value m(A) represents the degree of evidence or belief that the element x in question belongs exactly to the set A but not to any B such that B ⊂ A. The pair (F, m) is called the body of evidence for x, where F is the set of all subsets A of X such that m(A) > 0. Elements of F are called focal elements. If the focal elements are nested (i.e., can be arranged in a sequence such as A_1 ⊂ A_2 ⊂ ... ⊂ A_k ⊂ ...), then the corresponding body of evidence is called a consonant body of evidence. In this context the following observations about a body of evidence may be made. If |F| = 1 and A ∈ F, then either |A| = 1 and there is no uncertainty, or |A| > 1 and there is uncertainty due to nonspecificity. Conversely, |F| > 1 and |A| = 1 for all A ∈ F represents a situation with only randomness. In all other cases both randomness and nonspecificity will be present, because when |F| > 1 and |A| > 1 for at least some A ∈ F, the element in question can be in any one of the sets (focal elements) and, given the focal set, it can be any member of that set. Shafer [5] defined two fuzzy measures on a body of evidence (F, m), namely Belief (Bel) and Plausibility (Pl), as follows:

Bel(A) = Σ_{B⊆A, B∈F} m(B);    (3)

and

Pl(A) = 1 − Bel(A^c) = Σ_{A∩B≠∅} m(B)    (4)

where A^c is the complement of A. Shafer also defined the Commonality Number of A as follows:

Cm(A) = Σ_{A⊆B} m(B).    (5)

Conceptually, Bel(A) represents the total degree of evidence that the concerned element belongs to A and/or some of its subsets. On the other hand, Pl(A) gives not only the total degree of evidence that the concerned element belongs to A or some of its subsets, but also to those sets that have nonempty intersections with A.
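Equations (3)-(5) translate directly into code. A minimal sketch in Python follows; the universe, set names, and mass values are hypothetical, chosen only to illustrate the three functions:

```python
def bel(m, A):
    """Eq. (3): total mass on focal sets contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    """Eq. (4): total mass on focal sets that intersect A."""
    return sum(v for B, v in m.items() if A & B)

def cm(m, A):
    """Eq. (5): total mass on focal sets that contain A."""
    return sum(v for B, v in m.items() if A <= B)

# Hypothetical body of evidence on X = {1, 2, 3}.
X = frozenset({1, 2, 3})
m = {frozenset({1}): 0.5, frozenset({1, 2}): 0.25, X: 0.25}  # a valid BPA
A = frozenset({1, 2})
print(bel(m, A), pl(m, A), cm(m, A))  # 0.75 1.0 0.5
assert abs(pl(m, A) - (1 - bel(m, X - A))) < 1e-12  # the duality in Eq. (4)
```

The final assertion checks the identity Pl(A) = 1 − Bel(A^c) stated in equation (4).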
Properties of these two measures can be found in [1, 5, 14]. A belief function (Bel) is called a vacuous belief function if Bel(X) = 1 and Bel(A) = 0 for A ≠ X. For a vacuous belief function, m(X) = 1 and m(A) = 0 for A ≠ X. For a consonant body of evidence the belief and plausibility measures are called necessity and possibility measures, respectively. The commonality function Cm(A) gathers pieces of evidence supported by A. The commonality function plays the role of belief for a conjunctive body of evidence [14]. Any of the four set functions m, Bel, Pl, and Cm can be expressed uniquely in terms of any other [5]. Lastly, we note that every possibility measure Π on P(X) is uniquely determined by a possibility distribution function r: X → [0, 1] via the formula:

Π(A) = max_{x∈A} r(x), for all nonempty A ⊆ X.
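The claim that m, Bel, Pl, and Cm are interconvertible rests on Möbius inversion: m(A) = Σ_{B⊆A} (−1)^{|A|−|B|} Bel(B). A small round-trip sketch follows; the helper names and the sample BPA are hypothetical, used only to demonstrate that the inversion recovers each focal mass:

```python
from itertools import combinations

def bel_from_m(m, A):
    """Bel(A) per Eq. (3): total mass on focal sets contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def m_from_bel(bel, A):
    """Moebius inversion: m(A) = sum over B subset of A of (-1)^(|A|-|B|) Bel(B)."""
    total = 0.0
    for r in range(len(A) + 1):
        for c in combinations(sorted(A), r):
            B = frozenset(c)
            total += (-1) ** (len(A) - len(B)) * bel(B)
    return total

# Hypothetical BPA over X = {1, 2, 3}.
X = frozenset({1, 2, 3})
m = {frozenset({1}): 0.5, frozenset({1, 2}): 0.25, X: 0.25}
bel = lambda A: bel_from_m(m, A)
# The round trip recovers each focal mass.
assert abs(m_from_bel(bel, frozenset({1})) - 0.5) < 1e-9
assert abs(m_from_bel(bel, frozenset({1, 2})) - 0.25) < 1e-9
assert abs(m_from_bel(bel, X) - 0.25) < 1e-9
```

The inversion enumerates all 2^|A| subsets of A, which is consistent with the O(N^2) complexity concerns (N the number of subsets) raised for the earlier measures.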


Journal: Int. J. Approx. Reasoning
Volume: 7
Year: 1992